143 research outputs found

    Impact of Transceiver Impairments on the Capacity of Dual-Hop Relay Massive MIMO Systems

    Despite the deleterious effect of hardware impairments on communication systems, most prior works have not investigated their impact on widely used relay systems. Most importantly, the use of inexpensive transceivers, which are prone to hardware impairments, is the most cost-efficient way to implement massive multiple-input multiple-output (MIMO) systems. This paper therefore investigates the impact of hardware impairments on MIMO relay networks with a large number of antennas. Specifically, we obtain a general expression for the ergodic capacity of dual-hop (DH) amplify-and-forward (AF) relay systems. Next, given the advantages of free probability (FP) theory over other known techniques in large random matrix theory, we pursue a large-limit analysis in the number of antennas and users, shedding light on the behavior of relay systems afflicted by hardware impairments.
    Comment: 6 pages, 4 figures, accepted at the IEEE Global Communications Conference (GLOBECOM 2015) - Workshop on Massive MIMO: From Theory to Practice, 2015
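
    The following is a minimal Monte Carlo sketch of the additive-distortion model that underlies such an analysis: each hop's SINR is degraded by distortion noise whose power scales with the signal power through an impairment level kappa, and the two hops are combined with the standard variable-gain AF expression. The parameter values (impairment level, SNR grid) are illustrative assumptions, and the sketch does not reproduce the paper's free-probability large-antenna analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_rate_dh_af(snr_db, kappa=0.1, trials=100_000):
    """Monte Carlo estimate of the ergodic rate of a scalar dual-hop
    amplify-and-forward link with additive transceiver distortion;
    kappa is an aggregate (EVM-like) impairment level per hop."""
    snr = 10 ** (snr_db / 10)  # transmit SNR per hop (P / N0)
    h1 = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
    h2 = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
    g1, g2 = np.abs(h1) ** 2, np.abs(h2) ** 2
    # Per-hop SINR: useful power g*snr, distortion power g*snr*kappa^2, unit noise.
    sinr1 = g1 * snr / (g1 * snr * kappa ** 2 + 1)
    sinr2 = g2 * snr / (g2 * snr * kappa ** 2 + 1)
    # Standard end-to-end SINR combination for a variable-gain AF relay.
    sinr_e2e = sinr1 * sinr2 / (sinr1 + sinr2 + 1)
    return np.mean(np.log2(1 + sinr_e2e))

for snr_db in (0, 10, 20, 30):
    print(f"{snr_db:2d} dB -> {ergodic_rate_dh_af(snr_db):.3f} bit/s/Hz")
```

    With kappa > 0 the rate saturates at high SNR, which is the capacity ceiling caused by residual hardware impairments that the paper quantifies.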

    Sensing-Throughput Tradeoff for Interweave Cognitive Radio System: A Deployment-Centric Viewpoint

    Secondary access to the licensed spectrum is viable only if interference is avoided at the primary system. In this regard, different paradigms have been conceptualized in the existing literature. Of these, Interweave Systems (ISs), which employ spectrum sensing, have been widely investigated. Baseline models in the literature characterize the performance of an IS in terms of a sensing-throughput tradeoff; however, this characterization assumes knowledge of the involved channels at the secondary transmitter, which is unavailable in practice. Motivated by this fact, we establish a novel approach that incorporates channel estimation in the system model, and consequently investigate the impact of imperfect channel estimation on the performance of the IS. In particular, the variation induced in the detection probability affects the detector's performance at the secondary transmitter and may result in severe interference to the primary users. In view of this, we propose to employ average and outage constraints on the detection probability in order to capture the performance of the IS. Our analysis reveals that, with an appropriate choice of the estimation time determined by the proposed model, the degradation in the performance of the IS can be effectively controlled and the achievable secondary throughput significantly enhanced.
    Comment: 13 pages, 10 figures, accepted for publication in IEEE Transactions on Wireless Communications
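
    The classical sensing-throughput tradeoff that this work builds on can be sketched as follows: for a target detection probability, the false-alarm probability of an energy detector falls as the sensing time grows, while the time left for data transmission shrinks, so the secondary throughput peaks at an intermediate sensing duration. The sketch below uses the standard Gaussian-approximation formulas with assumed parameter values (sampling rate, frame length, SNRs); it models only this baseline tradeoff, not the paper's channel-estimation extension.

```python
import numpy as np
from scipy.stats import norm

# Baseline sensing-throughput tradeoff for an energy detector (Gaussian approximation).
# All numeric values below are assumptions chosen for illustration.
fs = 1e6                        # sampling rate
T = 100e-3                      # frame duration
gamma_p = 10 ** (-10 / 10)      # received primary SNR at the detector (-10 dB)
pd_target = 0.9                 # detection probability constraint
snr_s = 10 ** (20 / 10)         # secondary-link SNR

taus = np.linspace(0.5e-3, 20e-3, 200)   # candidate sensing durations
n = taus * fs                            # number of sensing samples
# False-alarm probability when the threshold is tuned to meet the Pd target.
pf = norm.sf(np.sqrt(2 * gamma_p + 1) * norm.isf(pd_target) + np.sqrt(n) * gamma_p)
throughput = (T - taus) / T * (1 - pf) * np.log2(1 + snr_s)

best = np.argmax(throughput)
print(f"best sensing time ~ {taus[best]*1e3:.2f} ms, "
      f"throughput ~ {throughput[best]:.2f} bit/s/Hz")
```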

    Multiple Access Techniques for Next Generation Wireless: Recent Advances and Future Perspectives

    Advances in multiple access techniques have been one of the key drivers in moving from one cellular generation to the next. Several multiple access techniques have been explored across the different generations, starting from the first, and various emerging multiplexing/multiple access techniques are being investigated for the next generation of cellular networks. In this context, this paper first provides a detailed review of existing work related to Space Division Multiple Access (SDMA). Subsequently, it highlights the main features and drawbacks of various existing and emerging multiplexing/multiple access techniques. Finally, we propose a novel concept of clustered orthogonal signature division multiple access for the next generation of cellular networks. The proposed concept envisions employing joint antenna coding to enhance the orthogonality of SDMA beams, with the objective of improving the spectral efficiency of future cellular networks.
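
    As a reminder of the basic ingredient behind signature-division schemes (and not the clustered scheme proposed in the paper, whose joint antenna coding is not detailed in the abstract), the toy example below shows how mutually orthogonal Walsh-Hadamard signatures let several users share the same resource and still be separated perfectly by correlation at the receiver.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustration only: length-8 Walsh-Hadamard signatures, one per user.
W = hadamard(8)                                    # rows are mutually orthogonal +/-1 sequences
symbols = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # one BPSK symbol per user
tx = W.T @ symbols                                 # superposition of all spread users
recovered = W @ tx / 8                             # despread: correlate with each signature
print(np.allclose(recovered, symbols))             # True: orthogonality removes inter-user interference
```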

    Live Data Analytics with Collaborative Edge and Cloud Processing in Wireless IoT Network

    Recently, big data analytics has received significant attention in a variety of application domains, including business, finance, space science, healthcare, telecommunications and the Internet of Things (IoT). Among these areas, IoT is considered an important platform for bringing people, processes, data and things/objects together in order to enhance the quality of our everyday lives. However, the key challenges are how to effectively extract useful features from the massive amount of heterogeneous data generated by resource-constrained IoT devices in order to provide real-time information and feedback to the end users, and how to utilize this data-aware intelligence to enhance the performance of wireless IoT networks. Although there are parallel advances in cloud computing and edge computing for addressing some issues in data analytics, each has its own benefits and limitations. The convergence of these two computing paradigms, i.e., the massive, virtually shared pool of computing and storage resources of the cloud and the real-time data processing of edge computing, could effectively enable live data analytics in wireless IoT networks. In this regard, we propose a novel framework for coordinated processing between edge and cloud computing by integrating the advantages of both platforms. The proposed framework can exploit the network-wide knowledge and historical information available at the cloud center to guide edge computing units towards satisfying the various performance requirements of heterogeneous wireless IoT networks. Starting with the main features, key enablers and challenges of big data analytics, we describe the synergies and distinctions between cloud and edge processing. More importantly, we identify and describe the potential key enablers for the proposed edge-cloud collaborative framework, the associated key challenges and some interesting future research directions.
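
    Purely as an illustration of the kind of coordination the framework describes (with entirely hypothetical class names and a toy anomaly-detection task), the sketch below has an edge unit forward only compact summaries of its raw samples, while a cloud entity uses the accumulated history of reports to push refined guidance, here a detection threshold, back to the edge.

```python
import numpy as np

rng = np.random.default_rng(1)

class EdgeUnit:
    """Pre-processes raw samples locally and forwards only a compact summary."""
    def __init__(self, multiplier=3.0):
        self.multiplier = multiplier
    def process(self, samples):
        mean, std = samples.mean(), samples.std()
        flagged = np.abs(samples - mean) > self.multiplier * std
        return {"mean": mean, "std": std, "anomaly_rate": flagged.mean(),
                "anomalies": samples[flagged]}          # summaries, not the raw stream

class CloudCenter:
    """Uses the history of edge reports to refine the guidance sent back to the edge."""
    def __init__(self, target_rate=0.005):
        self.target_rate = target_rate
        self.rates = []
    def update(self, report, multiplier):
        self.rates.append(report["anomaly_rate"])
        # Nudge the threshold so the long-run flagged fraction tracks the target.
        return multiplier * (1.05 if np.mean(self.rates) > self.target_rate else 0.95)

edge, cloud = EdgeUnit(), CloudCenter()
for t in range(5):
    raw = rng.normal(0.0, 1.0, 1000)
    raw[::250] += 8.0                                   # inject a few outliers
    report = edge.process(raw)
    edge.multiplier = cloud.update(report, edge.multiplier)   # cloud guidance fed back
    print(t, "flagged:", report["anomalies"].size,
          "next multiplier:", round(edge.multiplier, 2))
```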

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, namely enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of existing and emerging solutions for addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
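
    To make the RAN-congestion problem concrete, the short simulation below estimates, for a single RA opportunity, the fraction of contending devices whose randomly chosen preamble remains collision-free. The numbers are assumptions (a pool of 54 contention preambles is a value commonly assumed in LTE RACH studies); the point is simply that the success rate collapses as the number of simultaneously active MTC devices grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def ra_success_rate(n_devices, n_preambles=54, trials=2000):
    """Fraction of devices whose randomly picked preamble is chosen by no one else
    in one RA opportunity (a simplified ALOHA-like contention model)."""
    successes = 0
    for _ in range(trials):
        choices = rng.integers(0, n_preambles, n_devices)
        counts = np.bincount(choices, minlength=n_preambles)
        successes += np.sum(counts == 1)      # preambles picked by exactly one device
    return successes / (trials * n_devices)

for n in (10, 50, 100, 300):
    print(f"{n:4d} devices -> success rate ~ {ra_success_rate(n):.2f}")
```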

    Impact of Residual Additive Transceiver Hardware Impairments on Rayleigh-Product MIMO Channels with Linear Receivers: Exact and Asymptotic Analyses

    Despite the importance of Rayleigh-product multiple-input multiple-output (MIMO) channels and their experimental validation, no prior work has investigated their performance in the presence of residual additive transceiver hardware impairments, which arise in practical scenarios. Hence, this paper focuses on the impact of these residual imperfections on the ergodic channel capacity for optimal receivers, and on the ergodic sum rate for linear minimum mean-squared-error (MMSE) receivers. Moreover, the low- and high-signal-to-noise-ratio regimes are characterized for both types of receivers. Simple closed-form expressions are obtained that allow interesting conclusions to be drawn. For example, the minimum transmit energy per information bit for optimal and MMSE receivers is not affected by the additive impairments. In addition to the exact analysis, we also study Rayleigh-product channels in the large-system regime, and we elaborate on the behavior of the ergodic channel capacity with optimal receivers as the severity of the additive transceiver impairments varies.
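
    A minimal Monte Carlo sketch of this setting, under assumed impairment levels and a simplified residual-distortion model (transmit distortion proportional to per-stream power, receive distortion proportional to per-antenna received power), is given below. It estimates the ergodic MMSE sum rate over a Rayleigh-product channel and is not the paper's exact analytical characterization.

```python
import numpy as np

rng = np.random.default_rng(3)

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def mmse_sum_rate(nt=4, nr=4, k=4, snr_db=20, kt=0.08, kr=0.08, trials=500):
    """Monte Carlo ergodic sum rate of MMSE reception over a Rayleigh-product
    channel H = H2 @ H1 / sqrt(k), with additive transmit/receive distortion
    levels kt, kr (a simplified model in the spirit of the paper)."""
    p = 10 ** (snr_db / 10) / nt        # per-stream transmit power (unit noise)
    rates = []
    for _ in range(trials):
        H = crandn(nr, k) @ crandn(k, nt) / np.sqrt(k)
        G = H @ H.conj().T
        # Covariance of the aggregate distortion-plus-noise seen at the receiver.
        R = kt ** 2 * p * G + kr ** 2 * p * np.diag(np.real(np.diag(G))) + np.eye(nr)
        rate = 0.0
        for s in range(nt):
            h = H[:, s]
            interf = p * (G - np.outer(h, h.conj())) + R   # other streams + impairments + noise
            sinr = np.real(p * h.conj() @ np.linalg.solve(interf, h))
            rate += np.log2(1 + sinr)
        rates.append(rate)
    return np.mean(rates)

print("ideal hardware  :", round(mmse_sum_rate(kt=0.0, kr=0.0), 2), "bit/s/Hz")
print("with impairments:", round(mmse_sum_rate(), 2), "bit/s/Hz")
```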

    Collaborative Distributed Q-Learning for RACH Congestion Minimization in Cellular IoT Networks

    Due to infrequent but massive concurrent access requests from the ever-increasing number of machine-type communication (MTC) devices, the existing contention-based random access (RA) protocols, such as slotted ALOHA, suffer from severe random access channel (RACH) congestion in emerging cellular IoT networks. To address this issue, we propose a novel collaborative distributed Q-learning mechanism that enables resource-constrained MTC devices to find unique RA slots for their transmissions, so that the number of collisions is significantly reduced. In contrast to the independent Q-learning scheme, the proposed approach utilizes the congestion level of the RA slots as a global cost during the learning process and can thus notably lower the learning time for low-end MTC devices. Our results show that the proposed learning scheme can significantly reduce RACH congestion in cellular IoT networks.
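
    A compact sketch of the idea, using illustrative sizes and a simple congestion-based cost shaping that may differ from the paper's exact reward design: each device keeps one Q-value per RA slot, greedily picks its best slot, and updates that value with a penalty growing with the congestion level of the chosen slot, so the population gradually spreads itself over distinct slots.

```python
import numpy as np

rng = np.random.default_rng(4)

n_devices, n_slots, n_frames = 20, 20, 400
Q = np.zeros((n_devices, n_slots))   # one Q-row per MTC device, one column per RA slot
alpha = 0.1                          # learning rate

for frame in range(n_frames):
    # Greedy slot selection with random tie-breaking.
    choices = np.array([rng.choice(np.flatnonzero(q == q.max())) for q in Q])
    load = np.bincount(choices, minlength=n_slots)   # devices per RA slot this frame
    for d, s in enumerate(choices):
        # Collaborative flavour: the congestion level of the chosen slot acts as a
        # shared cost (e.g. broadcast by the base station), not just a local collision bit.
        reward = 1.0 if load[s] == 1 else -float(load[s] - 1)
        Q[d, s] += alpha * (reward - Q[d, s])

print("RA slots still in collision after learning:", int(np.sum(load > 1)))
```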